\input memo.tex[let,jmc]
\noindent {\it Artificial Intelligence: The Very Idea}. By JOHN HAUGELAND.
The MIT Press, Cambridge, MA, 1985. xii + 290 pp. $14.95.
ISBN 0-262-08153-9. A Bradford Book.
John Haugeland, a philosopher, has got the ``very idea'' essentially
correct in this determinedly non-technical book. Unfortunately,
discussing the philosophy of AI non-technically imposes limitations as
severe as those of a non-technical discussion of the philosophy of
mathematics or quantum mechanics. Besides that, he omits discussion
of the use of mathematical logic in AI, which could
be treated non-technically to a considerable extent. We begin
with the positive content of the book, after which we will discuss
the limitations of the non-technical approach which characterizes
almost all writing about AI by philosophers, even in the professional
philosophical literature.
First of all, Haugeland has got right the polarization between
the scoffers and the boosters of AI --- the self-assurance of both
sides about the main philosophical issue. The scoffers say the idea is
ridiculous, ``like imagining that your car (really) hates you'',
while the boosters hold that it is only a matter of time until we understand
intelligence well enough to program it. This reviewer is a booster
and accepts Haugeland's characterization subject to some qualifications
not important enough to mention.
Second, he's right about the abstractness of the AI approach
to intelligence. We consider it inessential whether the intelligence
is implemented by electronics or by neurochemical mechanisms.
\vfill\eject\end
Haugeland ignores Tarskian semantics, perhaps in both senses of ``ignores''.
p. 98 Chaitin
p. 112 GOFAI doesn't rest on the theory that intelligence is computation.
The theory is that intelligent behavior can be realized computationally.
The extent to which human intelligence is realized digitally is
a matter for psychologists and physiologists.
For example, the chemistry of hormones may intervene in human
thought processes in an analog way.
The book is misleadingly non-technical.
Logic is ignored.
The state of AI technology.
The idea of AI doesn't actually depend on whether human thinking is
essentially computational, although I think that hypothesis is substantially true.
Suppose that an important part of thinking is analog. The most
plausible hypothesis in that direction is that the quantitative
amounts of the different hormones that are released determine
certain decisions. Then we might be prepared to supplement the
digital computations representing reasoning by a digital simulation
of the important analog processes. This would work unless these
processes were too extensive to be economically simulated.
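
To make this concrete, here is a minimal sketch in Python of what such a
supplement might look like. The names, the decay dynamics, and the decision
rule are all hypothetical illustrations, not claims about physiology or
about any existing program.

    # Hypothetical sketch: a digital reasoner whose decisions are biased by a
    # digitally simulated analog process (hormone levels). The names, the
    # decay-plus-stimulus dynamics, and the risk weighting are illustrative
    # assumptions, not a claim about human physiology.

    def simulate_hormone(level, stimulus, decay=0.1, dt=0.01, steps=100):
        """Numerically integrate a simple decay-plus-stimulus equation."""
        for _ in range(steps):
            level += dt * (stimulus - decay * level)
        return level

    def decide(options, caution):
        """Digital reasoning step: pick the best risk-adjusted option."""
        return max(options, key=lambda o: o["reward"] - caution * o["risk"])

    options = [
        {"name": "bold plan", "reward": 10.0, "risk": 6.0},
        {"name": "safe plan", "reward": 4.0, "risk": 1.0},
    ]

    # A frightening stimulus raises the simulated "hormone", raising caution,
    # which in turn tips the purely digital decision toward the safe plan.
    caution = simulate_hormone(level=0.5, stimulus=2.0)
    print(decide(options, caution)["name"])

The point of the sketch is only that the analog part can be replaced by a
digital simulation feeding a parameter into the reasoning, provided the
simulation remains economical.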
Artificial intelligence is a science under development. It has
substantial conceptual problems. Under these conditions it is
not an easy task to summarize the field for the layman --- or
even for the practitioner. Maybe it's as though someone tried
to summarize atomic physics in 1910.
The physical symbol system hypothesis.
How is meaning possible in a physical/mechanical universe?
An example of how philosophy gets itself entangled.
Perhaps the book's biggest weakness is that it gives little
sense of AI as a research activity. AI researchers only rarely
ask what intelligence {\it is}, while they spend most of their time
asking how computers can be made to do something in particular.
This is illustrated by a problem that has been unsolved since the
1950s. Arthur Samuel (1959, 1967) wrote programs for playing checkers that
learned to optimize the coefficients of the linear polynomial that
evaluated positions, e.g. they learned the best weights to be ascribed to
the numbers of kings and single men, control of the center and the back
rows, and other functions of position discussed in the books about
checkers. The program replayed master games and adjusted its coefficients to
predict the moves considered good. However, checker books also contain
information that can't readily be fitted into a position evaluation
function. For example, a king can hold two single men of the opposite
side against the edge of the board so that neither can advance without
being captured. If the opponent allows this to persist until both
sides king their remaining men, the side holding the other's two singles
will outnumber the opponent by one on the rest of the board, and this
suffices to force exchanges and win. Samuel's program ``would like'' to
advance the two singles, but the learned evaluation function doesn't give
it a high priority and the actual disaster may be 30 moves in the future
--- too far for lookahead. It wouldn't be difficult to modify
the program to take this specific phenomenon into account, but humans learn
such things on the fly.
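
For readers unfamiliar with Samuel's technique, the following Python sketch
shows its general shape: a linear evaluation over hand-chosen features whose
coefficients are nudged until the book move outscores the alternatives. The
feature values, positions, and perceptron-style update rule are simplified
assumptions for illustration, not Samuel's actual program.

    # Simplified sketch of Samuel-style coefficient learning: a linear
    # position evaluator whose weights are adjusted so the "book" move scores
    # higher than the alternatives. Features and positions are toy stand-ins.

    def evaluate(features, weights):
        """Linear evaluation: weighted sum of position features."""
        return sum(w * f for w, f in zip(weights, features))

    def learn_from_book(weights, book_features, other_features, rate=0.01):
        """Nudge weights until the book move outscores each alternative."""
        for alt in other_features:
            if evaluate(book_features, weights) <= evaluate(alt, weights):
                for i in range(len(weights)):
                    weights[i] += rate * (book_features[i] - alt[i])
        return weights

    # Features: (kings, single men, center control, back-row control).
    weights = [1.0, 1.0, 1.0, 1.0]
    book_move = [1, 5, 3, 2]                  # position after the master's move
    alternatives = [[2, 4, 3, 3], [1, 5, 1, 2]]
    print(learn_from_book(weights, book_move, alternatives))

A learner of this form can only express what its fixed features express,
which is exactly why the two-singles holding pattern, a fact about the
interaction of pieces rather than a weighted count, escapes it.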
The unsolved problem is how to make programs, specifically game
playing programs, decompose a situation into subsituations that can be
analyzed separately and whose interaction is subsequently analyzed.
Humans do this all the time, but it seems that quite good checkers and
chess can be played without it --- taking advantage of the computer's high
speed. It is essential in the Japanese game of {\it go}, so there
are no good {\it go} programs yet.
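
As a toy Python illustration of the decomposition idea, the sketch below
groups pieces into clusters assumed independent when far enough apart and
sums per-cluster evaluations. The distance criterion and the local
evaluation are placeholder assumptions; deciding which parts of a real
position genuinely interact, and analyzing how the parts recombine, is
precisely the unsolved part.

    # Toy illustration of position decomposition: group pieces into clusters
    # assumed independent when separated by more than a threshold, evaluate
    # each cluster on its own, and sum. The distance criterion and the
    # per-cluster score are placeholders for the real, unsolved analysis.

    def clusters(pieces, threshold=2):
        """Group pieces lying within `threshold` of some member of a group."""
        groups = []
        for p in pieces:
            near = [g for g in groups
                    if any(abs(p[0] - q[0]) <= threshold
                           and abs(p[1] - q[1]) <= threshold for q in g)]
            merged = [p] + [q for g in near for q in g]
            groups = [g for g in groups if g not in near]
            groups.append(merged)
        return groups

    def evaluate_cluster(group):
        """Placeholder local evaluation: net material in the cluster."""
        return sum(value for (_, _, value) in group)

    pieces = [(0, 0, +1), (1, 1, -1),      # one skirmish in a corner
              (7, 7, +3), (7, 6, +1)]      # an unrelated cluster elsewhere
    print(sum(evaluate_cluster(g) for g in clusters(pieces)))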